
    Dissociation between the Activity of the Right Middle Frontal Gyrus and the Middle Temporal Gyrus in Processing Semantic Priming

    The aim of this event-related functional magnetic resonance imaging (fMRI) study was to test whether the right middle frontal gyrus (MFG) and middle temporal gyrus (MTG) show differential sensitivity to the effect of prime-target association strength on repetition priming. In the experimental condition (RP), the target occurred after repetitive presentation of the prime within an oddball design. In the control condition (CTR), the target followed a single presentation of the prime, with the same target probability as in RP. To manipulate semantic overlap between prime and target, both conditions (RP and CTR) employed either the onomatopoeia “oink” as the prime and the referent “pig” as the target (OP) or vice versa (PO), since semantic overlap was previously shown to be greater in OP. The results showed that the left MTG was sensitive to release of adaptation, while both the right MTG and MFG were sensitive to the extraction and verification of sequence regularity. However, dissociated activity between OP and PO was revealed in RP only in the right MFG. Specifically, the target “pig” (OP) and the physically equivalent target in CTR elicited comparable deactivations, whereas the target “oink” (PO) elicited a less inhibited response in RP than in CTR. This interaction in the right MFG is explained by integrating these effects into a model of competition between perceptual and conceptual effects in priming.

    Ultra-Rapid Categorization of Fourier-Spectrum Equalized Natural Images: Macaques and Humans Perform Similarly

    BACKGROUND: Comparative studies of cognitive processes find similarities not only between humans and apes, but also between humans and monkeys. Even high-level processes, like the ability to categorize classes of object in any natural scene under ultra-rapid time constraints, seem to be present in rhesus macaque monkeys (despite their smaller brain and lack of language and cultural background). An interesting and still open question is the degree to which the same images are processed with the same efficacy by humans and monkeys when a low-level cue, the spatial frequency content, is controlled. METHODOLOGY/PRINCIPAL FINDINGS: We used a set of natural images equalized in Fourier spectrum and asked whether it is still possible to categorize them as containing an animal, and at what speed. One rhesus macaque monkey performed a forced-choice saccadic task with good accuracy (67.5% and 76% for new and familiar images, respectively), although performance was lower than with non-equalized images. Importantly, the minimum reaction time was still very fast (100 ms). We compared the performance of human subjects with the same setup and the same set of (new) images. Overall mean performance of humans was also lower than with the original images (64% correct), but the minimum reaction time was still short (140 ms). CONCLUSION: Performance on individual images (% correct, but not reaction times) was significantly correlated between humans and the monkey, suggesting that both species use similar features to perform the task. A similar advantage for full-face images was seen in both species. The results also suggest that local low spatial frequency information could be important, a finding that fits the theory that fast categorization relies on a rapid feedforward magnocellular signal.
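
    The Fourier spectrum equalization used in this study is not spelled out in the abstract; a minimal sketch of one common approach, assuming grayscale images of equal size loaded as NumPy arrays, replaces each image's amplitude spectrum with the group average while preserving its phase:

        import numpy as np

        def equalize_fourier_spectra(images):
            # One common equalization approach (an assumption, not the
            # study's exact procedure): give every image the group-average
            # amplitude spectrum while keeping its original phase spectrum.
            ffts = [np.fft.fft2(img) for img in images]
            mean_amp = np.mean([np.abs(f) for f in ffts], axis=0)
            return [np.real(np.fft.ifft2(mean_amp * np.exp(1j * np.angle(f))))
                    for f in ffts]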

    Task and spatial frequency modulations of object processing: an EEG study.

    Visual object processing may follow a coarse-to-fine sequence imposed by fast processing of low spatial frequencies (LSF) and slow processing of high spatial frequencies (HSF). Objects can be categorized at varying levels of specificity: the superordinate (e.g. animal), the basic (e.g. dog), or the subordinate (e.g. Border Collie). We tested whether superordinate and more specific categorization depend on different spatial frequency ranges, and whether any such dependencies might be revealed by, or influence, signals recorded using EEG. We used event-related potentials (ERPs) and time-frequency (TF) analysis to examine the time course of object processing while participants performed either a grammatical gender-classification task (which generally forces basic-level categorization) or a living/non-living judgement (superordinate categorization) on everyday, real-life objects. Objects were filtered to contain only HSF or LSF. We found a greater positivity and greater negativity for HSF than for LSF pictures in the P1 and N1, respectively, but no effect of task on either component. A later, fronto-central negativity (N350) was more negative in the gender-classification task than in the superordinate categorization task, which may indicate that this component relates to semantic or syntactic processing. We found no significant effects of task or spatial frequency on evoked or total gamma-band responses. Our results demonstrate early differences in the processing of HSF and LSF content that were not modulated by categorization task, with later responses reflecting such higher-level cognitive factors.
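
    LSF/HSF filtering of this kind is commonly implemented with a Gaussian filter in the frequency domain; a minimal sketch, with cutoff values that are illustrative assumptions rather than the study's parameters:

        import numpy as np

        def spatial_frequency_filter(img, cutoff, keep="low"):
            # Gaussian low- or high-pass filter applied in the frequency
            # domain; cutoff is in cycles per image (illustrative values).
            h, w = img.shape
            fy = np.fft.fftfreq(h) * h   # vertical frequency, cycles/image
            fx = np.fft.fftfreq(w) * w   # horizontal frequency, cycles/image
            radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
            lowpass = np.exp(-((radius / cutoff) ** 2))
            mask = lowpass if keep == "low" else 1.0 - lowpass
            return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

        # e.g. lsf = spatial_frequency_filter(img, 8, keep="low")
        #      hsf = spatial_frequency_filter(img, 24, keep="high")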

    Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    Background: Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings: Participants completed an object detection task in which they made an object-presence or object-absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance: Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of a heard word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
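
    The sensitivity measure d′ comes from signal detection theory; a minimal sketch of its standard computation from detection-task counts (the example counts are made up):

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # d' = z(hit rate) - z(false-alarm rate). Adding 0.5 to each
            # count (the log-linear correction) avoids infinite z-scores
            # when a rate is exactly 0 or 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # e.g. d_prime(40, 10, 12, 38)  ->  about 1.5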

    Event-Related Potentials Reveal Rapid Verification of Predicted Visual Input

    Human information processing depends critically on continuous predictions about upcoming events, but the temporal convergence of expectancy-based top-down and input-driven bottom-up streams is poorly understood. We show that, during reading, event-related potentials differ between exposure to highly predictable and unpredictable words no later than 90 ms after visual input. This result suggests an extremely rapid comparison of expected and incoming visual information and gives an upper temporal bound for theories of top-down and bottom-up interactions in object recognition.

    Valence-Specific Modulation in the Accumulation of Perceptual Evidence Prior to Visual Scene Recognition

    Visual scene recognition is a dynamic process through which incoming sensory information is iteratively compared with predictions regarding the most likely identity of the input stimulus. In this study, we used a novel progressive unfolding task to characterize the accumulation of perceptual evidence prior to scene recognition, and its potential modulation by the emotional valence of the scenes. Our results show that emotional (pleasant and unpleasant) scenes led to slower accumulation of evidence than neutral scenes. In addition, when controlling for the potential contribution of non-emotional factors (i.e., familiarity and complexity of the pictures), our results confirm a reliable shift in the accumulation of evidence for pleasant relative to neutral and unpleasant scenes, suggesting a valence-specific effect. These findings indicate that proactive iterations between sensory processing and top-down predictions during scene recognition are reliably influenced by the rapidly extracted (positive) emotional valence of the visual stimuli. We interpret these findings in accordance with the notion of a genuine positivity offset during emotional scene recognition.
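
    Accumulation of perceptual evidence is often formalized as a noisy accumulator that triggers recognition once a threshold is crossed; a minimal sketch, with all parameter values illustrative rather than the study's, showing how a lower drift rate yields slower recognition:

        import numpy as np

        def recognition_time(drift, threshold=1.0, noise=0.1, dt=0.01, seed=0):
            # Accumulate noisy evidence until it first crosses the
            # threshold; returns the crossing time in seconds.
            rng = np.random.default_rng(seed)
            evidence, t = 0.0, 0.0
            while evidence < threshold:
                evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                t += dt
            return t

        # On average, recognition_time(0.3) > recognition_time(0.5):
        # weaker evidence (slower accumulation) crosses the threshold later.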

    Learning to look the other way


    Smooth pursuit under stimulus-response uncertainty does not follow Hick’s law

    Simple reaction times (RTs) are typically faster than choice RTs and increase with stimulus-response uncertainty according to Hick’s law. Here we show that smooth pursuit eye movement RTs show no effect of stimulus-response uncertainty, while joystick-tracking RTs show a step change between the simple and choice conditions but no significant increase beyond two choices. The results suggest that there is a benefit to pre-programming for joystick tracking but not for smooth pursuit eye movements (SPEMs).
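
    For reference, Hick’s law states that mean choice RT grows logarithmically with the number n of equally likely stimulus-response alternatives,

        RT = a + b \log_2(n + 1)

    where a and b are empirically fitted constants; the finding above is that smooth-pursuit latencies show no such growth with n.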